197 research outputs found

    Investigating the biological relevance in trained embedding representations of protein sequences

    As genome sequencing is becoming faster and cheaper, an abundance of DNA and protein sequence data is available. However, experimental annotation of structural or functional information develops at a much slower pace. Therefore, machine learning techniques have been widely adopted to make accurate predictions on unseen sequence data. In recent years, deep learning has been gaining popularity, as it allows for effective end-to-end learning. One consideration for its application to sequence data is the choice of a suitable and effective sequence representation strategy. In this paper, we investigate the significance of three common encoding schemes on the multi-label prediction problem of Gene Ontology (GO) term annotation, namely a one-hot encoding, an ad-hoc trainable embedding, and pre-trained protein vectors, using different hyper-parameters. We found that traditional unigram one-hot encodings achieved very good results, only slightly outperformed by unigram ad-hoc trainable embeddings and bigram pre-trained embeddings (by at most 3% for the Fmax score), suggesting that exploring different encoding strategies can be beneficial. Most interestingly, when analyzing and visualizing the trained embeddings, we found that biologically relevant (dis)similarities between amino acid n-grams were implicitly learned, consistent with their physicochemical properties.
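    As a concrete illustration of the encoding schemes compared above, the sketch below contrasts a unigram one-hot encoding with an ad-hoc trainable embedding for amino acid sequences. This is a minimal sketch in PyTorch under our own assumptions (toy sequence, embedding size of 8), not the authors' exact setup.

```python
# Minimal sketch (not the paper's exact setup): a one-hot unigram encoding
# versus a trainable embedding for amino acid sequences in PyTorch.
# The toy sequence and embedding size are illustrative assumptions.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"           # 20 standard amino acids
aa_to_idx = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

seq = "MKTAYIAKQR"                              # toy protein fragment
idx = torch.tensor([aa_to_idx[aa] for aa in seq])

# One-hot encoding: each residue becomes a fixed 20-dimensional indicator vector.
one_hot = nn.functional.one_hot(idx, num_classes=len(AMINO_ACIDS)).float()

# Ad-hoc trainable embedding: the 20 x d matrix is learned end-to-end with the model,
# which is where biologically meaningful (dis)similarities can emerge during training.
embedding = nn.Embedding(num_embeddings=len(AMINO_ACIDS), embedding_dim=8)
dense = embedding(idx)

print(one_hot.shape)   # torch.Size([10, 20])
print(dense.shape)     # torch.Size([10, 8])
```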

    RNA-protein binding motifs mining with a new hybrid deep learning based cross-domain knowledge integration approach

    BACKGROUND: RNAs play key roles in cells through their interactions with proteins known as RNA-binding proteins (RBPs), and their binding motifs enable crucial understanding of the post-transcriptional regulation of RNAs. How RBPs correctly recognize their target RNAs, and why they bind specific positions, is still far from clear. Machine learning-based algorithms are widely acknowledged to be capable of speeding up this process. Although many automatic tools have been developed to predict RNA-protein binding sites from the rapidly growing multi-resource data, e.g. sequence and structure, their domain-specific features and formats have posed significant computational challenges. One current difficulty is that the knowledge shared across sources sits at a higher abstraction level than the observed data, so directly integrating observed data across domains is inefficient. The other difficulty is how to interpret the prediction results. Existing approaches tend to terminate after outputting the potential discrete binding sites on the sequences, but how to assemble them into meaningful binding motifs is a topic worthy of further investigation. RESULTS: In view of these challenges, we propose a deep learning-based framework (iDeep) that uses a novel hybrid convolutional neural network and deep belief network to predict RBP interaction sites and motifs on RNAs. This new protocol transforms the original observed data into a high-level abstract feature space using multiple layers of learning blocks, where the shared representations across different domains are integrated. To validate our iDeep method, we performed experiments on 31 large-scale CLIP-seq datasets, and our results show that by integrating multiple sources of data, the average AUC can be improved by 8% compared to the best single-source-based predictor; and through cross-domain knowledge integration at an abstraction level, it outperforms the state-of-the-art predictors by 6%. Besides the overall enhanced prediction performance, the convolutional neural network module embedded in iDeep is also able to automatically capture interpretable binding motifs for RBPs. Large-scale experiments demonstrate that these mined binding motifs agree well with experimentally verified results, suggesting iDeep is a promising approach for real-world applications. CONCLUSION: The iDeep framework not only achieves better performance than the state-of-the-art predictors, but also readily captures interpretable binding motifs. iDeep is available at http://www.csbio.sjtu.edu.cn/bioinf/iDeep ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s12859-017-1561-8) contains supplementary material, which is available to authorized users.
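    The motif-capturing behaviour attributed to iDeep's convolutional module can be illustrated with a small 1-D CNN scanning one-hot encoded RNA. The sketch below is a loose approximation under our own assumptions (filter count, kernel width, no deep belief network branch), not the published iDeep architecture.

```python
# Illustrative sketch only: a 1-D convolutional module of the kind used to scan
# one-hot RNA sequences for binding motifs. Layer sizes and the missing deep
# belief network branch are assumptions made for brevity.
import torch
import torch.nn as nn

class MotifCNN(nn.Module):
    def __init__(self, n_filters: int = 16, motif_len: int = 7):
        super().__init__()
        # Each convolutional filter acts like a position weight matrix (PWM):
        # after training, its weights can be read off as a candidate binding motif.
        self.conv = nn.Conv1d(in_channels=4, out_channels=n_filters, kernel_size=motif_len)
        self.pool = nn.AdaptiveMaxPool1d(1)     # keep the strongest motif match per filter
        self.head = nn.Linear(n_filters, 1)     # binding site vs. non-binding site

    def forward(self, x):                       # x: (batch, 4, seq_len) one-hot RNA
        h = torch.relu(self.conv(x))
        h = self.pool(h).squeeze(-1)
        return torch.sigmoid(self.head(h))

scores = MotifCNN()(torch.rand(2, 4, 101))      # two toy 101-nt sequences
print(scores.shape)                             # torch.Size([2, 1])
```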

    Learning Fast and Slow: PROPEDEUTICA for Real-time Malware Detection

    In this paper, we introduce and evaluate PROPEDEUTICA, a novel methodology and framework for efficient and effective real-time malware detection, leveraging the best of conventional machine learning (ML) and deep learning (DL) algorithms. In PROPEDEUTICA, all software processes in the system start execution subject to a conventional ML detector for fast classification. If a piece of software receives a borderline classification, it is subjected to further analysis via more computationally expensive and more accurate DL methods, via our newly proposed DL algorithm DEEPMALWARE. Further, we introduce delays to the execution of software subjected to deep learning analysis as a way to "buy time" for DL analysis and to rate-limit the impact of possible malware in the system. We evaluated PROPEDEUTICA with a set of 9,115 malware samples and 877 commonly used benign software samples from various categories for the Windows OS. Our results show that the false positive rate for conventional ML methods can reach 20%, while for modern DL methods it is usually below 6%. However, the classification time for DL can be 100X longer than for conventional ML methods. PROPEDEUTICA improved the detection F1-score from 77.54% (conventional ML method) to 90.25%, and reduced the detection time by 54.86%. The percentage of software subjected to DL analysis was approximately 40% on average, and the application of delays to software subjected to ML reduced the detection time by approximately 10%. Finally, we found and discussed a discrepancy between detection accuracy offline (analysis after all traces are collected) and on-the-fly (analysis in tandem with trace collection). Our insights show that conventional ML and modern DL-based malware detectors in isolation cannot meet the needs of efficient and effective malware detection: high accuracy, low false positive rate, and short classification time. Comment: 17 pages, 7 figures
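    The fast-then-slow triage PROPEDEUTICA describes can be sketched as a cheap classifier whose borderline scores are escalated to a slower deep model. The snippet below is illustrative only: the thresholds, random forest, toy feature vectors, and the placeholder slow_dl_model stand in for the paper's detectors and DEEPMALWARE.

```python
# Hedged sketch of a two-stage triage: a cheap conventional classifier screens
# every process, and only borderline scores are escalated to a slower deep model.
# Thresholds, models, and feature shapes are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LOW, HIGH = 0.3, 0.7                     # borderline band that triggers DL analysis

rng = np.random.default_rng(0)
X_train = rng.random((500, 32))          # toy feature vectors for software traces
y_train = rng.integers(0, 2, 500)        # toy labels: 0 = benign, 1 = malware

fast_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def slow_dl_model(x):
    # Placeholder for an expensive deep model; returns a malware probability.
    return float(x.mean())

def classify(x):
    p = fast_model.predict_proba(x.reshape(1, -1))[0, 1]
    if LOW <= p <= HIGH:                 # borderline: escalate (and rate-limit) the process
        p = slow_dl_model(x)
    return "malware" if p > 0.5 else "benign"

print(classify(rng.random(32)))
```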

    PATROL: Privacy-Oriented Pruning for Collaborative Inference Against Model Inversion Attacks

    Collaborative inference has been a promising solution to enable resource-constrained edge devices to perform inference using state-of-the-art deep neural networks (DNNs). In collaborative inference, the edge device first feeds the input to a partial DNN locally and then uploads the intermediate result to the cloud to complete the inference. However, recent research indicates that model inversion attacks (MIAs) can reconstruct input data from intermediate results, posing serious privacy concerns for collaborative inference. Existing perturbation and cryptography techniques are inefficient and unreliable in defending against MIAs while maintaining accurate inference. This paper provides a viable solution, named PATROL, which develops privacy-oriented pruning to balance privacy, efficiency, and utility of collaborative inference. PATROL takes advantage of the fact that later layers in a DNN can extract more task-specific features. Given limited local resources for collaborative inference, PATROL intends to deploy more layers at the edge based on pruning techniques to enforce task-specific features for inference and reduce task-irrelevant but sensitive features for privacy preservation. To achieve privacy-oriented pruning, PATROL introduces two key components, Lipschitz regularization and adversarial reconstruction training, which increase the reconstruction errors by reducing the stability of MIAs and enhance the target inference model by adversarial training, respectively.
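    The two PATROL components can be illustrated by the loss terms they would contribute. The sketch below is an assumption-laden approximation: the spectral norm of the edge layers stands in for a Lipschitz penalty, a small decoder plays the model-inversion attacker, and the layer sizes and weights lambda1/lambda2 are made up for the example.

```python
# Minimal sketch, under assumptions, of the two ingredients: a Lipschitz-style
# penalty (approximated by the spectral norm of edge-side weights) and an
# adversarial reconstruction term in which a decoder tries to invert the
# intermediate features while the edge model learns to resist it.
import torch
import torch.nn as nn

edge = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))    # stays on device
cloud_head = nn.Linear(64, 10)                                               # completes inference
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))  # simulated attacker

x = torch.rand(8, 784)
y = torch.randint(0, 10, (8,))
z = edge(x)                                   # intermediate result sent to the cloud

task_loss = nn.functional.cross_entropy(cloud_head(z), y)
recon_loss = nn.functional.mse_loss(decoder(z), x)

# Spectral-norm proxy for the Lipschitz constant of the edge layers.
lipschitz_pen = sum(torch.linalg.matrix_norm(m.weight, ord=2)
                    for m in edge if isinstance(m, nn.Linear))

lambda1, lambda2 = 0.01, 0.1                  # illustrative trade-off weights
# Edge objective: do the task, keep the Lipschitz constant small, make reconstruction hard.
edge_objective = task_loss + lambda1 * lipschitz_pen - lambda2 * recon_loss
# The decoder (attacker) would be trained separately to minimize recon_loss.
print(edge_objective.item())
```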

    Location, inventory and testing decisions in closed-loop supply chains: a multimedia company

    Our partnering firm is a Chinese manufacturer of multimedia products that needs guidance in developing its imminent Closed-Loop Supply Chain (CLSC). To study this problem, we take into account location, inventory, and testing decisions in a CLSC setting with stochastic demand for new products and time-sensitive returned products. Our analysis pays particular attention to the different roles assigned to the reverse Distribution Centers (DCs) and how each option affects the optimal CLSC design. The roles considered are collection and consolidation, additional testing tasks, and direct shipments with no reverse DCs. The problem concerning our partnering firm is formulated as a scenario-based chance-constrained mixed-integer program, which is reformulated as a conic quadratic mixed-integer program that can be solved efficiently via commercial optimization packages. The completeness of the proposed model allows us to develop a decision support tool for the firm and to offer several useful managerial insights. These insights are inferred from our computational experiments using data from the Chinese firm and a second data set based on the U.S. geography. Particularly interesting insights relate to how changes in the reverse flows can impact the forward supply chain and the inventory dynamics concerning the joint DCs. This research is partially supported by the National Natural Science Foundation of China under grants 71771135, 71371106 and 71332005.
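    The abstract does not spell out the constraints, but the standard device behind turning a chance constraint into a conic quadratic one can be shown for a single Gaussian constraint; the firm's actual scenario-based model is more involved.

```latex
% Illustration only: a chance constraint under Gaussian uncertainty is equivalent
% to a second-order cone (conic quadratic) constraint when \varepsilon < 1/2.
\mathbb{P}\!\left(\xi^\top x \le b\right) \ge 1-\varepsilon,
\qquad \xi \sim \mathcal{N}(\mu,\Sigma)
\;\Longleftrightarrow\;
\mu^\top x + \Phi^{-1}(1-\varepsilon)\,\sqrt{x^\top \Sigma\, x} \le b,
\qquad \varepsilon < \tfrac{1}{2}.
```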

    Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning

    Federated continual learning (FCL) learns incremental tasks over time from confidential datasets distributed across clients. This paper focuses on rehearsal-free FCL, which suffers from severe forgetting when learning new tasks due to the lack of access to historical task data. To address this issue, we propose Fed-CPrompt, based on prompt learning techniques, to obtain task-specific prompts in a communication-efficient way. Fed-CPrompt introduces two key components, asynchronous prompt learning and contrastive continual loss, to handle asynchronous task arrival and heterogeneous data distributions in FCL, respectively. Extensive experiments demonstrate the effectiveness of Fed-CPrompt in achieving SOTA rehearsal-free FCL performance. Comment: Accepted by FL-ICML 202
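    The contrastive continual loss is described only at a high level in the abstract; the sketch below shows one generic way such a term could look, pushing the prompt being learned for the current task away from frozen prompts of earlier tasks. The shapes, temperature, and logsumexp form are illustrative assumptions, not Fed-CPrompt's published loss.

```python
# Loose sketch (not the paper's loss): a contrastive term that keeps the current
# task's trainable prompt dissimilar from frozen prompts of earlier tasks, one
# way to limit interference without a rehearsal buffer.
import torch
import torch.nn.functional as F

def contrastive_prompt_loss(current, previous, temperature=0.5):
    """current: (d,) trainable prompt; previous: (k, d) frozen prompts from past tasks."""
    cur = F.normalize(current, dim=0)
    prev = F.normalize(previous, dim=1)
    sims = prev @ cur / temperature        # cosine similarity to each old prompt
    # Penalize similarity to any earlier prompt; minimizing this pushes prompts apart.
    return torch.logsumexp(sims, dim=0)

current_prompt = torch.randn(64, requires_grad=True)
old_prompts = torch.randn(3, 64)           # frozen prompts from three earlier tasks
loss = contrastive_prompt_loss(current_prompt, old_prompts)
loss.backward()
print(loss.item())
```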